PID-Based Approach to Adversarial Attacks

Authors

Abstract

Adversarial attacks can misguide deep neural networks (DNNs) by adding small-magnitude perturbations to normal examples, and these perturbations are mainly determined by the gradient of the loss function with respect to the inputs. Previously, various strategies have been proposed to enhance the performance of adversarial attacks. However, all these methods only utilize the gradients in the present and past to generate adversarial examples; until now, the trend of gradient change in the future (i.e., the derivative of the gradient) has not been considered. Inspired by the classic proportional-integral-derivative (PID) controller from the field of automatic control, we propose a new PID-based approach for generating adversarial examples. The present, past, and future gradients are all exploited in our method, and they correspond to the P, I, and D components of the PID controller, respectively. Extensive experiments consistently demonstrate that our method can achieve higher success rates and exhibit better transferability than state-of-the-art gradient-based attacks. Furthermore, the method possesses good extensibility and can be applied to almost all available gradient-based attacks.
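
The abstract does not spell out the update rule, but the P/I/D analogy maps naturally onto an iterative gradient attack: the present gradient plays the role of P, the accumulated past gradients (as in momentum attacks) play the role of I, and the difference between successive gradients approximates the future trend, D. The PyTorch sketch below is an illustrative reading of that analogy, not the authors' exact algorithm; the gains kp, ki, kd and all hyperparameters are assumptions.

```python
import torch
import torch.nn.functional as F

def pid_attack(model, x, y, eps=8/255, alpha=2/255, steps=10,
               kp=1.0, ki=1.0, kd=0.5):
    # Iterative L-infinity attack whose update combines three terms:
    #   P: the current gradient,
    #   I: the (normalized) sum of past gradients, as in momentum attacks,
    #   D: the change between consecutive gradients, a proxy for the
    #      future trend of the gradient.
    x_adv = x.clone().detach()
    integral = torch.zeros_like(x)
    prev_grad = torch.zeros_like(x)
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        integral = integral + grad / (grad.abs().mean() + 1e-12)
        derivative = grad - prev_grad
        update = kp * grad + ki * integral + kd * derivative
        # Signed step, then projection back into the eps-ball around x.
        x_adv = x_adv.detach() + alpha * update.sign()
        x_adv = torch.min(torch.max(x_adv, x - eps), x + eps).clamp(0, 1)
        prev_grad = grad.detach()
    return x_adv.detach()
```

Note that with kd = 0 the update degenerates to a momentum-style attack, which is consistent with the abstract's claim that the approach extends existing gradient-based methods.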

Similar Resources

A Multi-strength Adversarial Training Method to Mitigate Adversarial Attacks

Some recent works revealed that deep neural networks (DNNs) are vulnerable to so-called adversarial attacks, where input examples are intentionally perturbed to fool DNNs. In this work, we revisit a DNN training process that includes adversarial examples in the training dataset so as to improve the DNN's resilience to adversarial attacks, namely, adversarial training. Our experiments show that d...
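
As a rough illustration of the training loop described here (not the paper's exact multi-strength scheme), the PyTorch sketch below crafts FGSM examples at several perturbation strengths on the fly and averages their losses with the clean loss; the strengths and the FGSM inner attack are assumptions.

```python
import torch
import torch.nn.functional as F

def adversarial_training_step(model, optimizer, x, y,
                              strengths=(2/255, 4/255, 8/255)):
    # Gradient of the clean loss w.r.t. the inputs, reused for FGSM
    # at every strength.
    x_req = x.clone().detach().requires_grad_(True)
    grad = torch.autograd.grad(F.cross_entropy(model(x_req), y), x_req)[0]

    # Clean loss plus one adversarial loss per perturbation strength.
    losses = [F.cross_entropy(model(x), y)]
    for eps in strengths:
        x_adv = (x + eps * grad.sign()).clamp(0, 1).detach()
        losses.append(F.cross_entropy(model(x_adv), y))

    loss = torch.stack(losses).mean()
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```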

Adversarial Attacks on Image Recognition

The purpose of this project is to extend the work done by Papernot et al. in [4] on adversarial attacks in image recognition. We investigated whether a reduction in feature dimensionality can maintain a comparable level of misclassification success while increasing computational efficiency. We formed an attack on a black-box model with an unknown training set by forcing the oracle to misclassif...

Boosting Adversarial Attacks with Momentum

Deep neural networks are vulnerable to adversarial examples, which poses security concerns on these algorithms due to the potentially severe consequences. Adversarial attacks serve as an important surrogate to evaluate the robustness of deep learning models before they are deployed. However, most of the existing adversarial attacks can only fool a black-box model with a low success rate because...
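
The method referenced here is the momentum iterative attack (MI-FGSM): a decayed running sum of L1-normalized gradients replaces the raw gradient, which stabilizes the update direction across iterations and improves black-box transferability. A minimal sketch, assuming a PyTorch image classifier and the commonly used defaults (mu = 1, alpha = eps / steps):

```python
import torch
import torch.nn.functional as F

def mi_fgsm(model, x, y, eps=16/255, steps=10, mu=1.0):
    alpha = eps / steps          # per-step size; total budget stays within eps
    g = torch.zeros_like(x)      # accumulated (momentum) gradient
    x_adv = x.clone().detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        # Accumulate the L1-normalized gradient with decay factor mu
        # (assumes a 4D NCHW image batch).
        g = mu * g + grad / (grad.abs().sum(dim=(1, 2, 3), keepdim=True) + 1e-12)
        x_adv = (x_adv.detach() + alpha * g.sign()).clamp(0, 1)
    return x_adv.detach()
```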

Adversarial Attacks and Defences Competition

To accelerate research on adversarial examples and the robustness of machine learning classifiers, Google Brain organized a NIPS 2017 competition that encouraged researchers to develop new methods to generate adversarial examples as well as new ways to defend against them. In this chapter, we describe the...

Sparsity-based Defense against Adversarial Attacks on Linear Classifiers

Deep neural networks represent the state of the art in machine learning in a growing number of fields, including vision, speech and natural language processing. However, recent work raises important questions about the robustness of such architectures, by showing that it is possible to induce classification errors through tiny, almost imperceptible, perturbations. Vulnerability to such “adversa...
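
The sketch below illustrates the general idea of a sparsifying front end: project the input onto its k largest-magnitude coefficients in a fixed basis before a linear classifier sees it, which limits how much a small perturbation can move the classifier's score. The DCT basis, the value of k, and the NumPy/SciPy implementation are illustrative assumptions, not the paper's exact construction.

```python
import numpy as np
from scipy.fft import dct, idct

def sparsify(x, k):
    # Transform a 1D signal to the DCT domain, keep only the k
    # largest-magnitude coefficients, and transform back.
    c = dct(x, norm='ortho')
    small = np.argsort(np.abs(c))[:-k]   # indices of all but the top-k
    c[small] = 0.0
    return idct(c, norm='ortho')

# A linear classifier then scores the sparsified input, e.g.:
#   score = w @ sparsify(x, k) + b
```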

Journal

Journal title: Proceedings of the ... AAAI Conference on Artificial Intelligence

Year: 2021

ISSN: 2159-5399, 2374-3468

DOI: https://doi.org/10.1609/aaai.v35i11.17204